# Harm Classification
MD Judge V0 2 Internlm2 7b
Apache-2.0
A safety protection tool fine-tuned based on internlm2-7b-chat, providing human-readable judgment explanations and fine-grained harm scoring
Large Language Model
Transformers English

M
OpenSafetyLab
1,823
15
Shieldgemma 9b
ShieldGemma is a series of safety content moderation models based on Gemma 2, designed to moderate content across four harm categories (sexual content, dangerous content, hate speech, and harassment).
Large Language Model
Transformers

S
google
507
22
Featured Recommended AI Models